Mikaela Angelina Uy
Towards Few-View Reconstruction and Generation of User-Controllable 3D Geometry for Extended Realities
Research Abstract:
I envision a future where our experiences and interactions are not limited to physical reality and where extended realities (AR/VR/MR) play a bigger part in our daily lives. My research interests revolve around realizing this future – I primarily work on the representation and generation of 3D objects/scenes, covering both i) explicits, e.g. meshes and CAD models, and ii) implicits, e.g. neural radiance fields (NeRFs), for user-controllable 3D reconstruction and content creation. Explicit representations take advantage of classical geometry processing machinery, e.g. deformation and shape priors, that allows for the creation of high-quality 3D geometry, but they are unfortunately harder to directly optimize in neural networks, especially with user controllability. On the other hand, implicits, such as NeRFs, are easier to learn and optimize with their baked-in smoothness prior, but may lose the fine-grained geometric details, as well as the explicit control handles, such as keypoints, that are present in meshes and CAD models. My research goal is to unify these 3D representations and leverage the advantages of both to achieve high-quality, photorealistic reconstruction and user-controllable generation. Taking a step further, I am particularly interested in the creation of new variations of objects and scenes by leveraging existing artist-created assets (3D models) or in-the-wild sensor data (images or scans) to generate new high-quality 3D content through AI-assisted technology. These are all essential in extended reality applications where the physical and virtual worlds coexist. Differentiating my work from most other 3D research, I focus on connecting classical techniques in geometry processing and computer vision, and adapting these to end-to-end trainable neural network frameworks to achieve mathematically-grounded and fundamentally sound learning-based approaches.
Bio:
Mikaela Angelina Uy is a fourth-year PhD student at Stanford University advised by Leonidas Guibas. She works at the intersection of computer vision, graphics, geometry processing, and machine learning. Specifically, she is interested in diving into different representations of 3D objects and scenes for various downstream tasks such as deformation, reconstruction, controllable generation, and variation synthesis. She is particularly drawn to designing methods that connect classical techniques to learning-based approaches that are fundamentally grounded and mathematically inspired. She is a recipient of the 2023 Apple Scholars in AI/ML PhD Fellowship and the 2022 Snap Research Fellowship. Previously, she interned at Adobe, Autodesk, and Google, and she obtained her Bachelor's degree in Mathematics and Computer Science from the Hong Kong University of Science and Technology.